Image denoising, in which a noisy image is the input and an image with reduced noise is the output, has always been a central challenge in image processing. Although traditional techniques cannot fully recover pixels lost to noise, Dragonfly's Deep Learning approach can accurately distinguish between real image detail and noise. This allows you to remove noise while actually recovering image detail.
Original image (left) and denoised with Noise2Noise_SRResNet model (right)
Acknowledgments: Sample courtesy of Dr Xuejun Sun, University of Alberta, Cross Cancer Institute. Imaged by Rachan Parwani on a ZEISS GeminiSEM 300.
The video below provides an overview of training Deep Models for denoising.
Denoising with Deep Learning (49:29)
You can also view this video and others in the Recorded Webinars section on our website (https://www.theobjects.com/dragonfly/learn-recorded-webinars.html).
The following topics are discussed in the video. Links to Help topics with further information are also provided.
The following items are required for training a Deep Learning model for denoising:
You should note that you can extract training data as a subset of the original data (see Extracting New Images from Marked Slices).
A selection of untrained models suitable for regression are supplied with the Deep Learning Tool (see Deep Learning Architectures). You can also download models from the Infinite Toolbox (see Infinite Toolbox), or import models from Keras.
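If you plan to import a model from Keras, the typical workflow is to save it to a file first. A minimal sketch, assuming the HDF5 format and a placeholder two-layer architecture (the layer choices are arbitrary examples; check the import dialog for the formats your version of Dragonfly accepts):

```python
# Minimal sketch: saving a Keras model for import into the Deep Learning
# Tool. The architecture below is a placeholder, not a recommended design.
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(64, 64, 1)),
    keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
    keras.layers.Conv2D(1, 3, padding="same"),  # regression output layer
])
model.save("my_model.h5")  # HDF5 file that can then be imported
```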
The following items are optional for training a Deep Learning model for denoising:
You can apply different denoising strategies by pairing different inputs with outputs and then training a Deep Model to predict the original image without noise. These include pairing a high-noise input with a low-noise output, a high-noise input with a high-noise output, or a source image with added noise with another noisy version of the same source image.
High-Noise/Low-Noise… In this intuitive approach, a high-noise input image is paired with a low-noise output image to train a model. Data can be acquired experimentally by capturing short-duration and long-duration images or by creating synthetic low-noise images through image filtering (see Smoothing Filters).
High-Noise/High-Noise… This approach takes advantage of the fact that neural networks are not good at reproducing noise. In this case, the same high-noise image serves as both the input and the output.
Noise-to-Noise… Here, a source image with added noise is paired with another version of the source image with independently added noise. In this case, the signal common to both the input and output images will be extracted after a moderate number of training iterations. You should note that Dragonfly provides an additive noise filter for adding noise to images. A minimal sketch of all three pairings follows below.
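The following Python sketch (illustrative only, not Dragonfly's internal code; the noise level and smoothing sigma are arbitrary example values) shows how each strategy pairs an input with an output:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
source = rng.random((256, 256)).astype(np.float32)  # placeholder acquisition

# High-Noise/Low-Noise: the noisy acquisition paired with a smoothed
# (synthetic low-noise) version of itself as the target.
x_hl, y_hl = source, gaussian_filter(source, sigma=1.5)

# High-Noise/High-Noise: the same noisy image as both input and output.
x_hh = y_hh = source

# Noise-to-Noise: two independent noise realizations added to the source.
sigma = 0.05
x_nn = source + rng.normal(0.0, sigma, source.shape)
y_nn = source + rng.normal(0.0, sigma, source.shape)
```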
To help monitor and evaluate the progress of training Deep Learning models, you can designate a 2D rectangular region for visual feedback. With the Visual Feedback option selected, the model’s inference will be displayed in the Training dialog in real time as each epoch is completed, as shown on the right of the screen capture below. In addition, you can create a checkpoint cache so that you can save a copy of the model at a selected checkpoint (see Enabling Checkpoint Caches and Loading and Saving Model Checkpoints). Saved checkpoints are marked in bold on the plotted graph, as shown below.
Training dialog
Note Any region that you define for visual feedback should not overlap the training data.

Note The visual feedback image for each epoch is saved during model training. You can review the result of each epoch by scrolling through the plotted graph. If the checkpoint cache is enabled, you can also save the model at a selected checkpoint when you review the training results (see Loading and Saving Model Checkpoints).
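Dragonfly manages checkpoint caching from the Training dialog itself. For readers who script training directly in Keras, the analogous mechanism is the ModelCheckpoint callback; the filename pattern and settings below are example choices, not Dragonfly defaults:

```python
# Save a copy of the model after every epoch so earlier states can be
# reviewed and restored, mirroring the idea of a checkpoint cache.
from tensorflow import keras

checkpoint = keras.callbacks.ModelCheckpoint(
    "checkpoint_epoch_{epoch:03d}.h5",  # one file per completed epoch
    save_best_only=False,               # keep every epoch for review
)
# Pass it to training: model.fit(..., callbacks=[checkpoint])
```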
Dragonfly's Deep Learning Tool provides a number of architectures — including Autoencoder, Noise2Noise, Noise2Noise_SRResNet, U-Net, U-Net 3D, and U-Net++ — that are suitable for denoising tasks.
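As an illustration of the encoder/decoder structure that these architectures share, here is a minimal convolutional autoencoder sketched in Keras. It is a toy example for orientation only, not the implementation shipped with the Deep Learning Tool:

```python
# Toy denoising autoencoder: compress the image, then reconstruct it.
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(64, 64, 1))
x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
x = layers.MaxPooling2D(2)(x)                     # encode: 64 -> 32
x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
x = layers.UpSampling2D(2)(x)                     # decode: 32 -> 64
outputs = layers.Conv2D(1, 3, padding="same")(x)  # regression output
autoencoder = keras.Model(inputs, outputs)
```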
The Deep Learning Tool dialog appears.
The Model Generator dialog appears (see Model Generator for additional information about the dialog).
This will filter the available architectures to those recommended for denoising.

Note A description of each architecture is available in the Architecture description box, along with a link to more detailed information.
Recommendation Your architecture selection should correspond to your chosen denoising strategy — High-Noise/Low-Noise, High-Noise/High-Noise, or Noise-to-Noise (see Denoising Strategies).

Note In most cases, denoising strategies require an Input count of 1.

Note Refer to Editable Parameters for Deep Learning Architectures for information about the settings available in the Model Generator dialog.
After processing is complete, a confirmation message appears.
Information about the loaded model appears in the dialog (see Details), while a graph view of the data flow is available on the Model Editing panel (see Model Editing Panel).
You can start training a model for denoising after you have prepared your training input(s) and output(s), as well as any required masks (see Prerequisites).
To open the Deep Learning Tool, choose Artificial Intelligence > Deep Learning Tool on the menu bar.
Information about the model appears in the Model information box (see Details).
Note In most cases, you should be able to train a denoising model supplied with the Deep Learning Tool as is, without making changes to its architecture.
The Model Training panel appears (see Model Training Panel).
Note If you chose to train your model in 3D, then additional options will appear for the input, as shown below. See Configuring Multi-Slice Inputs for information about selecting reference slices and spacing values.

Note If your model requires multiple inputs, select the additional input(s), as required.

Note If you are training with multiple training sets, click the Add New button and then choose the required input(s), output, and mask for the additional item(s).

In most cases, you can deselect Generate additional training data by augmentation.
In most cases, you should increase the Patch size as much as possible. In addition, the MeanSquareError loss function usually provides good results.

See Basic Settings for information about choosing the patch size, stride ratio, batch size, number of epochs, loss function, and optimization algorithm.
Note You should monitor the estimated memory ratio when you choose the training parameter settings. The ratio should not exceed 1.00 (see Estimated Memory Ratio).
You should note that this step is optional and that these settings can be adjusted after you have evaluated the initial training results.
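Expressed in plain Keras terms (continuing the autoencoder sketch above), the basic settings map directly onto compile and fit arguments. The optimizer, batch size, and epoch count here are example values, not recommendations:

```python
import numpy as np

# Example patches; in practice these come from your training input and output.
x_train = np.random.rand(256, 64, 64, 1).astype("float32")  # noisy patches
y_train = np.random.rand(256, 64, 64, 1).astype("float32")  # target patches

autoencoder.compile(optimizer="adam", loss="mean_squared_error")
history = autoencoder.fit(
    x_train, y_train,
    batch_size=32,           # larger batches and patches use more memory
    epochs=50,
    validation_split=0.2,    # held-out patches provide val_loss
)
```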
You can monitor the progress of training in the Training dialog, which is shown below.
During training, the quantities 'loss' and 'val_loss' should decrease. You should continue to train until 'val_loss' stops decreasing. You can also select any of the other available metrics to monitor training progress.
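In Dragonfly you simply stop training from the Training dialog once 'val_loss' plateaus. If you were driving the same training from Keras, the equivalent could be automated with the EarlyStopping callback (the patience value is an example):

```python
# Stop automatically once val_loss has not improved for several epochs.
from tensorflow import keras

early_stop = keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=5,                  # epochs to wait for an improvement
    restore_best_weights=True,   # roll back to the best-performing epoch
)
# Pass it to training: model.fit(..., callbacks=[early_stop])
```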
Note You can also click the List tab and then review the precise values for each epoch.
Note Refer to the topic Enabling Visual Feedback and Checkpoint Caches for information about visual feedback and checkpoint caches.
Note The measure of good denoising is how well detail is preserved while noise is removed.
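When a clean reference image is available, that balance can be quantified with metrics such as PSNR and SSIM. A minimal sketch using scikit-image follows; the arrays below are placeholders standing in for your reference and denoised images:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = np.random.rand(128, 128)                     # placeholder clean image
denoised = reference + 0.01 * np.random.randn(128, 128)  # placeholder result

# data_range is the value span of the images (1.0 for floats in [0, 1]).
psnr = peak_signal_noise_ratio(reference, denoised, data_range=1.0)
ssim = structural_similarity(reference, denoised, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")
```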
Note If your results continue to be unsatisfactory, you might consider choosing another architecture.